The article discusses the development and performance of a set of tiny test models trained on ImageNet-1k, created by Ross Wightman and published on Hugging Face. Covering several popular architecture families, these models are designed for quick verification of model functionality: users can download pretrained weights and run inference efficiently, even on modest hardware (see the inference sketch below). The models are deliberately small, with lower default input resolution, typically only one block per stage, and narrow widths. They were trained with a recent recipe adapted from MobileNet-v4, which is effective for maximizing accuracy in smaller models.

While the top-1 accuracy of these models is not especially impressive, the article notes their potential for fine-tuning on smaller datasets and for applications with tight compute budgets, such as embedded systems or reinforcement learning tasks (a fine-tuning sketch follows below). A detailed summary of performance metrics is provided, including top-1 and top-5 accuracy, parameter counts, and throughput at a resolution of 160x160 pixels. The results indicate that the models, while small, still reach reasonable accuracy, with some performing better at a slightly higher resolution of 192x192 pixels.

The article also reports throughput when the models are compiled with PyTorch 2.4.1 on an RTX 4090 GPU, listing the number of inference and training samples processed per second under different compilation modes (a benchmark sketch follows below). This data highlights the models' efficiency in terms of speed, which is crucial for real-time applications.

Finally, the article delves into the unique architectural variations of the models, providing insight into their design and the specific components used in each. For instance, the ByobNet combines elements from EfficientNet, ResNet, and DarkNet; the ConvNeXt variants use depth-wise convolutions and different activation functions; and the EfficientNet variants exercise several normalization techniques, including BatchNorm, GroupNorm, and LayerNorm. Overall, the article invites the community to explore applications for these tiny test models beyond mere testing, emphasizing their versatility and the innovative approaches taken in their design.
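As a concrete illustration of the quick-verification workflow described above, here is a minimal inference sketch using timm's standard API. The model name `test_vit.r160_in1k` is assumed here as one of the published tiny test models; the other `test_*` variants should load the same way.

```python
import timm
import torch
from PIL import Image

# Create one of the tiny test models with its pretrained ImageNet-1k weights.
# "test_vit.r160_in1k" is an illustrative choice among the test_* variants.
model = timm.create_model('test_vit.r160_in1k', pretrained=True)
model.eval()

# Resolve the model's default preprocessing (input size, mean/std, interpolation)
# and build the matching eval-time transform.
data_config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_config, is_training=False)

# Run inference on a single image and report the top-5 predictions.
img = Image.open('example.jpg').convert('RGB')
with torch.inference_mode():
    logits = model(transform(img).unsqueeze(0))
top5 = logits.softmax(dim=-1).topk(5)
print(top5.indices, top5.values)
```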
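For the fine-tuning use case, a sketch of the typical timm pattern is shown below; the 10-class head, learning rate, and stand-in batch are illustrative assumptions, not values from the article.

```python
import timm
import torch

# Replace the 1000-class ImageNet head with a small task-specific head.
# "test_efficientnet.r160_in1k" and num_classes=10 are illustrative choices.
model = timm.create_model('test_efficientnet.r160_in1k', pretrained=True, num_classes=10)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

# A single plain fine-tuning step; in practice this runs inside a DataLoader loop.
images = torch.randn(8, 3, 160, 160)   # stand-in batch at the default 160x160 resolution
labels = torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```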
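And for the throughput numbers, a rough micro-benchmark along the lines of the article's setup might look like the following. The batch size, iteration counts, and the `max-autotune` compile mode are assumptions rather than the article's exact configuration, and a CUDA GPU is required.

```python
import time
import timm
import torch

# Compile one of the tiny models for inference; mode="max-autotune" is one of
# torch.compile's built-in modes, picked here for illustration.
model = timm.create_model('test_convnext.r160_in1k', pretrained=True).cuda().eval()
model = torch.compile(model, mode='max-autotune')

x = torch.randn(256, 3, 160, 160, device='cuda')
with torch.inference_mode():
    for _ in range(10):              # warmup iterations let compilation/tuning finish
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(50):
        model(x)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f'{256 * 50 / elapsed:.0f} inference samples/sec')
```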